NeRF: An Eventual Successor for Deepfakes? - Metaphysic.ai

We'll take a deeper look at this proprietary technique when we chat with its creator, in a later article on autoencoder-based deepfakes. However, results this impressive are difficult to obtain with standard open-source deepfake software: they require expensive, powerful hardware, and usually entail very long training times to produce very limited sequences. Machine learning models are typically trained and developed within the capacity of the VRAM and tensor cores of a single video card -- a prospect that becomes more and more challenging in the age of hyperscale datasets, and which presents some specific obstacles to improving deepfake quality. Approaches that shunt training cycles to the CPU, or that divide the workload among multiple GPUs via Data Parallelism or Model Parallelism techniques (we'll examine these more closely in a later article), are still in the early stages. For the near future, a single-GPU training setup remains the most common scenario.
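To make the distinction concrete: data parallelism splits each *batch* across workers that all run the same model, while model parallelism splits the *model itself* (for example, layer by layer) across devices. Here is a minimal, hedged sketch of the data-parallel idea in plain Python -- no GPUs involved, and the `model_forward` stand-in is our own invention, not code from any deepfake package:

```python
# Sketch of data parallelism: split one batch across workers, run the
# same "model" on each shard, then combine the per-shard outputs.
# (In real training, per-shard gradients would also be averaged.)
from concurrent.futures import ThreadPoolExecutor

def model_forward(shard):
    # Stand-in for a forward pass: double each input value.
    return [2 * x for x in shard]

def data_parallel(batch, n_workers=2):
    # Split the batch into contiguous shards, one per worker.
    size = (len(batch) + n_workers - 1) // n_workers
    shards = [batch[i:i + size] for i in range(0, len(batch), size)]
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        results = pool.map(model_forward, shards)
    # Concatenate the per-shard outputs back into one batch.
    return [y for shard_out in results for y in shard_out]

print(data_parallel([1, 2, 3, 4]))  # [2, 4, 6, 8]
```

In frameworks such as PyTorch, this pattern is what wrappers like `torch.nn.DataParallel` and `DistributedDataParallel` automate across physical GPUs; model parallelism, by contrast, places different layers on different devices so that no single card has to hold the whole network.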